ROAM: Robust and Object-aware Motion Generation using Neural Pose Descriptors
Existing automatic approaches for 3D virtual character motion synthesis
supporting scene interactions do not generalise well to new objects outside
training distributions, even when trained on extensive motion capture datasets
with diverse objects and annotated interactions. This paper addresses this
limitation and shows that robustness and generalisation to novel scene objects
in 3D object-aware character synthesis can be achieved by training a motion
model with as few as one reference object. We leverage an implicit feature
representation trained on object-only datasets, which encodes an
SE(3)-equivariant descriptor field around the object. Given an unseen object
and a reference pose-object pair, we optimise for the object-aware pose that is
closest in the feature space to the reference pose. Finally, we use l-NSM, our motion generation model trained to transition seamlessly from locomotion to object interaction using the proposed bidirectional pose blending scheme. Through comprehensive numerical comparisons to state-of-the-art methods
and in a user study, we demonstrate substantial improvements in 3D virtual
character motion and interaction quality and robustness to scenarios with
unseen objects. Our project page is available at
https://vcai.mpi-inf.mpg.de/projects/ROAM/.
Comment: 12 pages, 10 figures; project page: https://vcai.mpi-inf.mpg.de/projects/ROAM
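A minimal sketch of the pose optimisation described above, written in PyTorch: the descriptor field is queried at the character's joint locations, and the joints are optimised so that their descriptors around the unseen object match those of the reference pose-object pair. The descriptor_field interface, the joint parameterisation, and all hyperparameters are illustrative assumptions, not ROAM's actual implementation.

import torch

def optimise_pose(descriptor_field, new_object, ref_object, ref_joints,
                  steps=500, lr=1e-2):
    # Hypothetical interface: descriptor_field(object, points) returns an
    # SE(3)-equivariant feature per query point, shape (J, D).
    with torch.no_grad():
        target = descriptor_field(ref_object, ref_joints)  # reference features

    # Optimise joint positions (J, 3) so their descriptors around the new
    # object match the reference features.
    joints = ref_joints.clone().requires_grad_(True)
    opt = torch.optim.Adam([joints], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        feats = descriptor_field(new_object, joints)
        loss = torch.nn.functional.mse_loss(feats, target)
        loss.backward()
        opt.step()
    return joints.detach()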
State of the Art in Dense Monocular Non-Rigid 3D Reconstruction
3D reconstruction of deformable (or non-rigid) scenes from a set of monocular
2D image observations is a long-standing and actively researched area of
computer vision and graphics. It is an ill-posed inverse problem: without additional prior assumptions, it permits infinitely many solutions that all project accurately onto the input 2D images. Non-rigid
reconstruction is a foundational building block for downstream applications
like robotics, AR/VR, or visual content creation. The key advantages of monocular cameras are their ubiquity and availability to end users, as well as their ease of use compared to more sophisticated camera set-ups such as stereo or multi-view systems. This survey focuses on state-of-the-art methods
for dense non-rigid 3D reconstruction of various deformable objects and
composite scenes from monocular videos or sets of monocular views. We first review the fundamentals of 3D reconstruction and deformation modeling from 2D image observations. We then start from general methods, which handle arbitrary scenes and make only a few prior assumptions, and proceed towards techniques that make stronger assumptions about the observed objects and types of deformations (e.g., human faces, bodies, hands, and animals). A significant part of this STAR is
also devoted to classification and a high-level comparison of the methods, as
well as an overview of the datasets for training and evaluation of the
discussed techniques. We conclude by discussing open challenges in the field
and the social aspects associated with the usage of the reviewed methods.
Comment: 25 pages
Learning Unsupervised Cross-domain Image-to-Image Translation Using a Shared Discriminator
Unsupervised image-to-image translation transforms images from a source domain into a target domain without using paired source-target images. Promising results have been obtained for this problem in an
adversarial setting using two independent GANs and attention mechanisms. We
propose a new method that uses a single shared discriminator between the two
GANs, which improves the overall efficacy. We assess the qualitative and
quantitative results on image transfiguration, a cross-domain translation task,
in a setting where the target domain shares similar semantics with the source domain. Our results indicate that, even without attention mechanisms, our method performs on par with attention-based methods and generates images of comparable quality.
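A minimal sketch of the shared-discriminator setup in PyTorch: both translation directions train against a single discriminator that classifies real versus translated images from either domain. How the two domains are pooled into one discriminator batch and the loss choices are assumptions for illustration; the paper's exact architecture may differ.

import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def train_step(G_ab, G_ba, D, opt_g, opt_d, real_a, real_b):
    # Translate in both directions with two generators.
    fake_b = G_ab(real_a)
    fake_a = G_ba(real_b)

    # Single shared discriminator: real images from either domain are
    # 'real'; translated images from either direction are 'fake'.
    d_real = D(torch.cat([real_a, real_b]))
    d_fake = D(torch.cat([fake_a.detach(), fake_b.detach()]))
    d_loss = (bce(d_real, torch.ones_like(d_real))
              + bce(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Both generators try to fool the same shared discriminator.
    g_logits = D(torch.cat([fake_a, fake_b]))
    g_loss = bce(g_logits, torch.ones_like(g_logits))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()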
MoFusion: A Framework for Denoising-Diffusion-based Motion Synthesis
Conventional methods for human motion synthesis are either deterministic or
struggle with the trade-off between motion diversity and motion quality. In
response to these limitations, we introduce MoFusion, a new
denoising-diffusion-based framework for high-quality conditional human motion
synthesis that can generate long, temporally plausible, and semantically
accurate motions based on a range of conditioning contexts (such as music and
text). We also present ways to introduce well-known kinematic losses for motion
plausibility within the motion diffusion framework through our scheduled
weighting strategy. The learned latent space can be used for several
interactive motion editing applications -- like inbetweening, seed
conditioning, and text-based editing -- thus providing crucial abilities for
virtual character animation and robotics. Through comprehensive quantitative
evaluations and a perceptual user study, we demonstrate the effectiveness of
MoFusion compared to the state of the art on established benchmarks in the
literature. We urge the reader to watch our supplementary video and visit
https://vcai.mpi-inf.mpg.de/projects/MoFusion.
Comment: CVPR23, 11 pages, 6 figures, 2 tables; project page: https://vcai.mpi-inf.mpg.de/projects/MoFusion
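A minimal sketch, in PyTorch, of how a kinematic loss could be folded into a denoising-diffusion training step with a timestep-dependent weight: the clean motion is re-estimated from the predicted noise, and the kinematic penalty counts more at low-noise timesteps, where that estimate is reliable. The kinematic_loss callable and the particular schedule (reusing the cumulative signal level as the weight) are hypothetical stand-ins for MoFusion's scheduled weighting strategy.

import torch
import torch.nn.functional as F

def diffusion_step(model, x0, t, alphas_cumprod, kinematic_loss):
    # Standard DDPM forward process: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps.
    noise = torch.randn_like(x0)                      # x0: (B, frames, dof)
    a_bar = alphas_cumprod[t].view(-1, 1, 1)          # (B, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise

    pred_noise = model(x_t, t)                        # hypothetical denoiser
    denoise_loss = F.mse_loss(pred_noise, noise)

    # Re-estimate the clean motion and apply a kinematic penalty (e.g.
    # bone-length or foot-skating terms; hypothetical callable returning
    # one value per batch element), weighted more as t approaches 0.
    x0_hat = (x_t - (1.0 - a_bar).sqrt() * pred_noise) / a_bar.sqrt()
    w = a_bar.view(-1)                                # in (0, 1], larger near t=0
    kin_loss = (w * kinematic_loss(x0_hat)).mean()
    return denoise_loss + kin_loss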